PRIS: Practical robust invertible network for image steganography
Image steganography is a technique for hiding secret information inside
another image, so that the secret is invisible to human eyes and can be
recovered when needed. Most existing image steganography methods have low
hiding robustness when the container images are affected by distortions such
as Gaussian noise and lossy compression. This paper proposes PRIS to improve
the robustness of image steganography. It is based on invertible neural
networks, and places two enhancement modules before and after the extraction
process, trained with a 3-step training strategy. Moreover, it accounts for
rounding error, which is almost always ignored by existing methods but is
unavoidable in practice. A gradient approximation function (GAF) is also
proposed to overcome the non-differentiability of the rounding distortion.
Experimental results show that our PRIS outperforms the state-of-the-art
robust image steganography method in both robustness and practicality. Code
is available at https://github.com/yanghangAI/PRIS, and a practical
demonstration of our model is at http://yanghang.site/hide/
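The abstract does not spell out the GAF's exact form. Below is a minimal sketch of the standard workaround it gestures at: a rounding layer whose backward pass substitutes a smooth surrogate gradient (here a straight-through identity), written in PyTorch. The class and function names are hypothetical, not taken from the PRIS codebase.

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Rounding with a surrogate gradient.

    Quantizing pixels to integers has zero gradient almost everywhere, so
    the backward pass uses an identity (straight-through) gradient instead.
    This is not the paper's exact GAF, only the common trick it resembles.
    """

    @staticmethod
    def forward(ctx, x):
        return torch.round(x)  # apply the real rounding distortion forward

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # treat rounding as identity when differentiating

def simulate_container(stego: torch.Tensor) -> torch.Tensor:
    # Map [0, 1] float images to 8-bit levels, round, and map back, so the
    # extractor is trained against realistically quantized container images.
    return RoundSTE.apply(stego * 255.0) / 255.0
```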
GP-NAS-ensemble: a model for NAS Performance Prediction
Estimating the performance of a given model architecture without training it
is of great significance in Neural Architecture Search (NAS), since
evaluating an architecture by training can take a long time. In this paper, a
novel NAS framework called GP-NAS-ensemble is proposed to predict the
performance of a neural network architecture from a small training dataset.
We make several improvements on the GP-NAS model so that it shares the
advantages of ensemble learning methods. Our method ranks second in the
performance prediction track of the CVPR 2022 second lightweight NAS
challenge.
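GP-NAS models predictor uncertainty with a Gaussian process, and the ensemble idea can be illustrated by bagging several GP regressors over architecture encodings. The sketch below uses scikit-learn on toy data; the encoding, kernel, and bootstrap scheme are assumptions for illustration, not the paper's actual method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy data: each row encodes an architecture (e.g. one-hot ops per layer),
# each target is its measured validation accuracy.
rng = np.random.default_rng(0)
X = rng.random((64, 16))          # 64 architectures, 16-dim encodings
y = rng.random(64)                # their (synthetic) accuracies

ensemble = []
for seed in range(5):
    idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap resample
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=1.0),
        alpha=1e-3,               # observation noise term
        random_state=seed,
    )
    gp.fit(X[idx], y[idx])
    ensemble.append(gp)

x_new = rng.random((1, 16))       # an unseen architecture encoding
preds = np.array([gp.predict(x_new)[0] for gp in ensemble])
print(f"predicted accuracy: {preds.mean():.3f} +/- {preds.std():.3f}")
```

Averaging bootstrapped GPs keeps the uncertainty estimates of a single GP while reducing the variance that comes from fitting on a small training set, which is the regime the abstract targets.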
Sentence Specified Dynamic Video Thumbnail Generation
With the tremendous growth of videos over the Internet, video thumbnails,
providing video content previews, are becoming increasingly crucial to
influencing users' online searching experiences. Conventional video thumbnails
are generated once purely based on the visual characteristics of videos, and
then displayed as requested. Hence, such video thumbnails, without considering
the users' searching intentions, cannot provide a meaningful snapshot of the
video contents that users actually care about. In this paper, we define a distinctively new
task, namely sentence specified dynamic video thumbnail generation, where the
generated thumbnails not only provide a concise preview of the original video
contents but also dynamically relate to the users' searching intentions with
semantic correspondences to the users' query sentences. To tackle such a
challenging task, we propose a novel graph convolved video thumbnail pointer
(GTP). Specifically, GTP leverages a sentence specified video graph
convolutional network to model both the sentence-video semantic interaction and
the internal video relationships incorporated with the sentence information,
based on which a temporal conditioned pointer network is then introduced to
sequentially generate the sentence specified video thumbnails. Moreover, we
annotate a new dataset based on ActivityNet Captions for the proposed new task,
which consists of 10,000+ video-sentence pairs with each accompanied by an
annotated sentence specified video thumbnail. We demonstrate that our proposed
GTP outperforms several baseline methods on the created dataset, and thus
believe that our initial results along with the release of the new dataset will
inspire further research on sentence specified dynamic video thumbnail
generation. Dataset and code are available at https://github.com/yytzsy/GTP.
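To make the two components concrete, here is a heavily simplified PyTorch sketch of a sentence-conditioned graph convolution over clip features followed by a single pointer step. All module names, shapes, and the similarity-based adjacency are illustrative assumptions, not GTP's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceConditionedGCN(nn.Module):
    """One graph-convolution step over video clips, conditioned on a sentence.

    Hypothetical simplification of GTP's sentence specified video graph
    convolution: clip nodes are fused with the sentence embedding, edges come
    from feature similarity, and messages are aggregated over neighbors.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)   # fuse clip and sentence features
        self.message = nn.Linear(dim, dim)    # transform neighbor messages

    def forward(self, clips: torch.Tensor, sentence: torch.Tensor) -> torch.Tensor:
        # clips: (T, dim) clip features; sentence: (dim,) sentence embedding.
        fused = torch.tanh(self.fuse(
            torch.cat([clips, sentence.expand_as(clips)], dim=-1)))
        adj = F.softmax(fused @ fused.t(), dim=-1)   # soft similarity graph
        return clips + torch.relu(adj @ self.message(fused))  # residual update

# Pointer step: score each clip against a decoder state and select the next
# thumbnail clip; GTP decodes several such clips sequentially.
dim = 128
gcn = SentenceConditionedGCN(dim)
clips, sentence = torch.randn(20, dim), torch.randn(dim)
nodes = gcn(clips, sentence)
state = torch.randn(dim)           # decoder state (placeholder)
scores = nodes @ state             # (T,) pointer logits over clips
next_clip = int(scores.argmax())   # index of the selected clip
print(f"selected clip {next_clip}")
```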